AAAI AI-Alert Ethics for Sep 28, 2021
New Report Assesses Progress And Risks Of Artificial Intelligence
Artificial intelligence has reached a critical turning point in its evolution, according to a new report by an international panel of experts assessing the state of the field. Substantial advances in language processing, computer vision and pattern recognition mean that AI is touching people's lives on a daily basis -- from helping them choose a movie to aiding in medical diagnoses. With that success, however, comes a renewed urgency to understand and mitigate the risks and downsides of AI-driven systems, such as algorithmic discrimination or the use of AI for deliberate deception. Computer scientists must work with experts in the social sciences and law to ensure that the pitfalls of AI are minimized. Those conclusions are from a report titled "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report," which was compiled by a panel of experts from computer science, public policy, psychology, sociology and other disciplines.
- Health & Medicine > Diagnostic Medicine (0.35)
- Health & Medicine > Therapeutic Area (0.30)
China's new proposed law could strangle the development of AI
The proposed law mandates that companies use algorithms to "actively spread positive energy." Under the proposal, companies must submit their algorithms to the government for approval or risk being fined and having their service terminated. This is an incredibly bad and even dangerous idea. It's what happens when people who don't understand AI try to regulate AI. Instead of fostering innovation, governments are looking at AI through their unique lenses of fear and trying to reduce the harm they worry about most. Thus, Western regulators focus on fears such as violation of privacy, while Chinese regulators are perfectly okay with collecting private data on their citizens but are concerned about AI's ability to influence people in ways deemed undesirable by the government.
- Law > Statutes (1.00)
- Government (1.00)
AI Regulation Is Coming
For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud. Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images -- most notably, the European Union's 2018 General Data Protection Regulation (GDPR). Some argue that curbing data collection will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with the help of ready, lightly regulated access to personal information of all sorts. Others point out that there's plenty of evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon. But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software -- particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan.
- Europe (1.00)
- Asia > China (0.34)
- North America > United States > New York (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.34)